Accurate motion and depth recovery are important for many robot vision tasks, including autonomous driving. Most previous studies have achieved cooperative multi-task interaction via pre-defined loss functions or cross-domain prediction. This paper proposes a multi-task scheme that achieves mutual assistance by means of our Flow to Depth (F2D), Depth to Flow (D2F), and exponential moving average (EMA) mechanisms. The F2D and D2F mechanisms enable multi-scale information integration between the optical-flow and depth domains based on differentiable shallow networks. A dual-head mechanism is used to predict optical flow from rigid and non-rigid motion in a split manner, which significantly improves optical-flow estimation performance. Furthermore, to make the predictions more robust and stable, EMA is used in our multi-task training. Experimental results on the KITTI dataset show that our multi-task scheme outperforms other multi-task schemes and provides marked improvements to the prediction results.
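The abstract mentions stabilising multi-task training with an exponential moving average. A minimal sketch of an EMA parameter update, assuming a plain dictionary-of-weights representation and an exaggerated decay value for illustration (neither detail is from the paper):

```python
def ema_update(ema_params, model_params, decay=0.999):
    """Blend the current model weights into a slowly moving EMA copy;
    the EMA weights change little per step, smoothing noisy updates."""
    return {k: decay * ema_params[k] + (1.0 - decay) * model_params[k]
            for k in ema_params}

# Toy illustration with a single scalar "weight" and decay 0.5:
ema = {"w": 1.0}
model = {"w": 0.0}
for _ in range(3):
    ema = ema_update(ema, model, decay=0.5)
# ema["w"] decays 1.0 -> 0.5 -> 0.25 -> 0.125
```

In practice the EMA copy, not the raw model, is typically used for evaluation, which is what makes the predictions more stable.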
Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging systems. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, supervised depth estimation is rarely applied in surgical settings. As an alternative, self-supervised methods have been introduced that train a depth estimator using only synchronized stereo image pairs. However, recent work has focused on left-right consistency in 2D, ignoring the valuable inherent 3D information of objects in real-world coordinates, which means the left-right 3D geometric structural consistency has not been fully exploited. To overcome this limitation, we present M3Depth, a self-supervised depth estimator that leverages the 3D geometric structural information hidden in stereo pairs while retaining monocular inference. The method also removes, via masking, the influence of border regions unseen in at least one of the stereo images, to enhance the correspondence between the left and right images in overlapping areas. Intensive experiments show that our method outperforms previous self-supervised approaches by a large margin on both a public dataset and a newly acquired dataset, indicating good generalization across different samples and laparoscopes.
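The 3D information "hidden" in a stereo pair comes from the standard pinhole-stereo geometry. A hedged sketch of the textbook disparity-to-depth relation that underlies recovering real-world coordinates from stereo (this is general stereo geometry, not the paper's specific loss):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo relation depth = f * B / d: converts per-pixel
    disparities (pixels) to metric depth given the focal length in
    pixels and the stereo baseline in metres."""
    return [focal_px * baseline_m / d for d in disparity_px]

depths = disparity_to_depth([10.0, 25.0], focal_px=500.0, baseline_m=0.05)
# f * B = 500 * 0.05 = 25.0, so depths are [2.5, 1.0] metres
```

Larger disparity means a closer object; enforcing consistency on these back-projected 3D points, rather than on 2D pixels, is the kind of constraint the abstract refers to.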
Semi-supervised learning methods are highly valued for image semantic segmentation tasks because quality annotations are scarce in the medical imaging community. In this paper, an advanced consistency-aware, pseudo-label-based self-ensembling approach is presented to fully exploit the power of Vision Transformers (ViT) and Convolutional Neural Networks (CNN). Our proposed framework consists of a feature-learning module in which ViT and CNN mutually enhance each other, and a guidance module suited to consistency awareness. Pseudo labels are inferred and used recurrently and separately by the CNN and ViT views in the feature-learning module to expand the dataset, and the two views benefit each other. Meanwhile, a perturbation scheme is designed for the feature-learning module, and averaged network weights are used to build the guidance module. By doing so, the framework combines the feature-learning strengths of CNN and ViT, strengthens performance through dual-view co-training, and enables consistency-aware supervision in a semi-supervised manner. A topological exploration of all alternative supervision modes between CNN and ViT is validated in detail, demonstrating the most promising performance and the specific settings of our method on semi-supervised medical image segmentation tasks. Experimental results show that the proposed method achieves state-of-the-art performance on a public benchmark dataset under a variety of metrics. The code is publicly available.
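Pseudo-labelling of the kind described above usually keeps only confident predictions from one view to supervise the other. A minimal sketch with confidence thresholding (the threshold value and list-of-softmax-rows representation are illustrative assumptions):

```python
def make_pseudo_labels(probs, threshold=0.9):
    """For each softmax row, take the argmax class as a pseudo label and
    keep it only if its confidence clears the threshold; returns a list
    of (label, keep) pairs."""
    out = []
    for row in probs:
        conf = max(row)
        out.append((row.index(conf), conf >= threshold))
    return out

labels = make_pseudo_labels([[0.95, 0.05], [0.55, 0.45]], threshold=0.9)
# -> [(0, True), (0, False)]: only the confident prediction is kept
```

In a dual-view setup, the kept labels from the CNN view would supervise the ViT view and vice versa, which is how the expanded dataset "benefits both" branches.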
Deformable image registration, which estimates the spatial transformation between different images, is an important task in medical imaging. Many previous studies have used learning-based multi-stage methods to perform 3D image registration with improved performance. However, the performance of multi-stage approaches is limited by the size of the receptive field, because complex motion does not occur at a single spatial scale. We propose a new registration network that combines a recursive network architecture with a mutual-attention mechanism to overcome these limitations. Compared with state-of-the-art deep learning methods, our recursive network achieves the highest accuracy on a lung computed tomography (CT) dataset (Dice score of 92% for the lungs and average surface distance of 3.8 mm) and one of the most accurate results on an abdominal CT dataset with 9 organs (Dice score of 55% and average surface distance of 7.8 mm). We also show that adding 3 recursive networks is sufficient to achieve state-of-the-art results without a significant increase in inference time.
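The idea of recursion here is that the same registration step is applied repeatedly, with each pass's warped output fed back in as the new moving image. A toy 1-D sketch of that control flow (the halving "step" is a stand-in, not the paper's network):

```python
def recursive_register(moving, fixed, step, n_recursions=3):
    """Apply the same registration step recursively: each pass warps the
    moving image closer to the fixed one, and its output becomes the
    input of the next pass."""
    for _ in range(n_recursions):
        moving = step(moving, fixed)
    return moving

# Toy 1-D stand-in: each "registration step" halves the remaining offset.
halve_gap = lambda m, f: m + 0.5 * (f - m)
result = recursive_register(0.0, 8.0, halve_gap, n_recursions=3)
# the offset shrinks 8 -> 4 -> 2 -> 1, so the result is 7.0
```

Each recursion only needs to correct the residual motion left by the previous one, which is why a limited receptive field per stage can still handle large, multi-scale deformations.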
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
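"Distilling token relations" means matching the student's token-to-token affinity distributions to the teacher's rather than matching raw features. A minimal sketch using a soft cross-entropy over one relation row per query token (the exact relation definition and loss in TinyMIM may differ; this is an illustrative assumption):

```python
import math

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def relation_distill_loss(teacher_rows, student_rows):
    """Cross-entropy between teacher and student token-relation
    distributions (one logit row per query token), averaged over rows."""
    total = 0.0
    for t_row, s_row in zip(teacher_rows, student_rows):
        t, s = softmax(t_row), softmax(s_row)
        total += -sum(ti * math.log(si) for ti, si in zip(t, s))
    return total / len(teacher_rows)

matched = relation_distill_loss([[2.0, 0.0]], [[2.0, 0.0]])
flipped = relation_distill_loss([[2.0, 0.0]], [[0.0, 2.0]])
# mimicking the teacher's relations gives the lower loss
```

Because relations are distributions over token pairs, they transfer between teacher and student even when the two have different feature dimensions.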
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
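The basic operation behind NAIVEATTACK-style injection is pasting a small trigger patch into images before (or during) distillation. A minimal sketch on nested-list images (the patch values and placement are illustrative assumptions):

```python
def add_trigger(image, trigger, top, left):
    """Paste a small trigger patch into an image (rows of pixel values),
    returning a new image and leaving the original untouched."""
    out = [row[:] for row in image]
    for i, trigger_row in enumerate(trigger):
        for j, value in enumerate(trigger_row):
            out[top + i][left + j] = value
    return out

clean = [[0] * 4 for _ in range(4)]
poisoned = add_trigger(clean, [[255, 255], [255, 255]], top=2, left=2)
# the 2x2 bottom-right corner now carries the trigger pattern
```

DOORPING differs in that the trigger itself is optimized at every distillation iteration rather than fixed up front, which is what pushes its attack success rate close to 1.0.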
Benefiting from its intrinsic capability to exploit supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe two drawbacks in the positive- and negative-sample construction mechanisms that limit existing algorithms from further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
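The objective described above can be sketched with plain cosine similarities: reward similarity to the positive view and penalise similarity to the cluster-centre negatives. A minimal illustration (the exact CCGC loss formulation is not given in the abstract, so this simple difference-of-similarities is an assumption):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def contrastive_score(anchor, positive, negative_centers):
    """Similarity to the positive minus mean similarity to the
    cluster-centre negatives; training would maximise this score."""
    neg = sum(cosine(anchor, c) for c in negative_centers) / len(negative_centers)
    return cosine(anchor, positive) - neg

score = contrastive_score([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
# perfectly aligned positive, orthogonal negative centre -> score 1.0
```

Using cluster centres as negatives, rather than arbitrary other nodes, avoids accidentally pushing apart two samples that actually belong to the same cluster.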
As one of the prevalent methods for achieving automation systems, Imitation Learning (IL) delivers promising performance across a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explaining framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficients to build an importance map. We also conducted experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and the connections between importance maps obtained from different IL models. The results show that R2RISE successfully distinguishes important frames in the demonstrations.
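The importance-map construction follows the RISE recipe: average the random keep-masks, weighting each by the return the correspondingly retrained policy achieved. A minimal sketch over per-frame binary masks (the normalisation choice is an illustrative assumption):

```python
def importance_map(masks, returns):
    """RISE-style saliency: average the binary keep-masks over frames,
    weighted by the environment return each masked demonstration set
    earned; frames kept in high-return runs score high."""
    n_frames = len(masks[0])
    acc = [0.0] * n_frames
    for mask, ret in zip(masks, returns):
        for i, keep in enumerate(mask):
            acc[i] += keep * ret
    total = sum(returns)
    return [a / total for a in acc]

imp = importance_map(masks=[[1, 0], [1, 1]], returns=[1.0, 1.0])
# frame 0 was kept in both runs, frame 1 in only one -> [1.0, 0.5]
```

With many random masks, a frame whose removal consistently hurts returns ends up with a low score, and vice versa.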
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
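A saliency-aware artifact metric of this kind generally pools a per-pixel artifact-strength map using saliency weights, so artifacts in regions viewers actually look at count for more. The abstract does not give SSTAM's formula, so the following weighted pooling is purely a hypothetical sketch:

```python
def saliency_weighted_score(artifact_map, saliency_map):
    """Hypothetical pooling: weight each location's artifact strength by
    its visual saliency and normalise by the total saliency mass."""
    num = sum(a * s for a, s in zip(artifact_map, saliency_map))
    den = sum(saliency_map)
    return num / den

score = saliency_weighted_score([1.0, 0.0], [3.0, 1.0])
# the artifact sits in the salient region, so it dominates: 3/4 = 0.75
```

Plain averaging would give 0.5 here; the saliency weighting is what aligns the metric with human perception.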
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally excavate the impact of the Transformer given limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer must identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
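The online-predicts-target objective is the BYOL-family recipe the BOLT name alludes to: a negative-cosine loss between the online branch's prediction and the target branch's representation. A minimal sketch (the 2 - 2·cos form follows the common BYOL convention; BOLT's exact loss is an assumption here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def byol_style_loss(online_prediction, target_representation):
    """Zero when the online branch exactly predicts the target branch's
    representation; grows as the two disagree."""
    return 2.0 - 2.0 * cosine(online_prediction, target_representation)

aligned = byol_style_loss([1.0, 0.0], [1.0, 0.0])      # perfect prediction
orthogonal = byol_style_loss([1.0, 0.0], [0.0, 1.0])   # no agreement
```

In such schemes the target branch receives no gradient and is typically an exponential moving average of the online branch, which prevents the two from collapsing to a trivial constant representation.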